# Guide - Using AI Telephone Bots
AI Telephone Bots can run as voice agents or advanced IVR flows. They can gather data, call external systems, and perform call-control actions.
## Contents
- Basic Configuration
- Models and Language
- Speech Input and Guarding
- Tools and Permissions
- Variables and Session Values
- Contexts, Steps, and Actions
- Webhooks
- RAG Search Tool
- Execution Notes
## Basic Configuration
AI Bots are configured under console -> Stuff -> Add -> AI Bot.
A valid config must include a root `description`.
```yaml
description: >
  You are an AI telephone bot for Widgets Ltd.
  Welcome the caller and collect their name.
initial: Thank you for calling Widgets Ltd, please tell me your name.
```
`initial` is optional. If set, it is spoken as the first assistant message for that scope (root, context, or step).
## Models and Language
If `model` is omitted (or unknown), the bot defaults to `mistral-large-2512`.
Currently supported model keys:

- `gpt-3.5-turbo`
- `gpt-4o`
- `gpt-4o-mini`
- `gpt-4.1`
- `gpt-4.1-mini`
- `gpt-4.1-nano`
- `mistral-large-2512`
- `mistral-small-2506`
`temperature` is clamped to 0..2.
The optional `language` should be an ISO 639 code and is used as a speech-to-text hint.
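Putting these keys together, a minimal sketch of a model configuration (the values shown are illustrative):

```yaml
model: gpt-4o-mini   # must be one of the supported model keys
temperature: 0.4     # clamped to 0..2
language: en         # ISO 639 code, used as a speech-to-text hint
```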
## Speech Input and Guarding
Speech-to-text can be configured with:
```yaml
stt:
  engine: voxtral   # or openai-whisper
```
There is no `twopass` option in bot YAML.
For stricter turn-taking, use `guard.in` at root, context, or step scope (same precedence as other scoped settings: step -> context -> root).
`guard.in` supports:

- `description`: instruction text for the guard classifier (JEXL templates supported)
- `model`: optional model override for guard classification (defaults to `mistral-small-2506`)
- `allow`: array of free-text allow rules (JEXL templates supported)
- `action`: what to do when input is classified bad
Action forms:
```yaml
guard:
  in:
    action:
      reprompt: Sorry, I did not catch that. ${{ prompt }}
```
or explicit tool form:
```yaml
guard:
  in:
    action:
      tool: reprompt   # or finish, hangup
      final: Sorry, I did not catch that. ${{ prompt }}
```
Recommended example:
```yaml
guard:
  in:
    description: >
      Check caller input against the current prompt.
      Be lenient to natural phrasing.
    model: mistral-small-2506
    allow:
      - Caller answers the prompt
      - Caller asks to speak with a person
    action:
      reprompt: Sorry, I did not catch that. ${{ prompt }}
```
Runtime behavior:
- The spoken prompt is always what is passed to STT.
- If `guard.in` is not configured, input is accepted normally.
- If `guard.in` is configured, a second AI classification pass runs on each captured user input.
- Guard classifier failures fail closed (they are treated as bad input).
- On bad input, `guard.in.action` runs (`reprompt`, `finish`, or `hangup`).
## Tools and Permissions
Built-in tools:

- `send_sms`
- `send_sms_caller`
- `jump_extension`
- `forward_call`
- `hangup`
- `finish`
Example:
```yaml
tools:
  send_sms:
    destinations:
      - 447700900123
  jump_extension:
    extensions:
      - "1000"
      - "1001"
  hangup: true
  finish: true
```
Permission resolution order is:
- step-level tools
- context-level tools
- root-level tools
A tool explicitly set to false at a narrower scope is denied even if enabled elsewhere.
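As a sketch of scope-based denial (the context and step shown are illustrative), `hangup` is enabled at root but denied inside a sensitive step:

```yaml
tools:
  hangup: true            # enabled at root scope
contexts:
  payment:
    steps:
      - description: Take card details; the AI must not end the call here.
        tools:
          hangup: false   # narrower scope wins, so hangup is denied in this step
```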
`hangup` and `finish` can be either:

- `true` (the AI provides the `final` message)
- an object with `final` (a fixed message enforced by config)
Example fixed final:
```yaml
tools:
  hangup:
    final: Thank you for calling. Goodbye.
```
## Variables and Session Values
Template format is `${{ ... }}`.
Runtime vars available under `var`:

- `var.now`
- `var.uuid`
- `var.callerid`
Date/time helper functions are available in JEXL compute/template expressions:
- `now()`
- `now("YYYY-MM-DD HH:mm:ss")`
- `now("YYYY-MM-DD HH:mm:ss", "UTC")`
- `formatdatetime(input, "YYYY-MM-DD")`
- `formatdatetime()` (defaults to current date/time)
- `yearssince(input)`
Useful `formatdatetime` tokens:

`YYYY`, `YY`, `MM`, `M`, `DD`, `D`, `HH`, `H`, `mm`, `m`, `ss`, `s`, `MMM`, `MMMM`, `ddd`, `dddd`
Examples:
```yaml
session:
  today_utc:
    compute: now("YYYY-MM-DD", "UTC")
  timestamp:
    compute: formatdatetime()
  patient_dob_display:
    compute: formatdatetime(session.dob, "ddd, DD MMM YYYY")
```
Session values are available under `session`.
You can pre-populate session values in config, including templates and compute expressions:
```yaml
session:
  enquirer_telephone: ${{var.callerid}}
  callerid_len:
    compute: var.callerid.length
```
Within logic, `last` contains the most recent webhook result (e.g. `last.success`).
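For example, a sketch that speaks a confirmation only when the most recent webhook call succeeded (the `submit_case` webhook name and wording are illustrative):

```yaml
steps:
  - description: Submit the caller's details.
    action:
      webhook: submit_case
  - initial: Your details have been saved.
    when: last.success   # last holds the most recent webhook result
```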
## Contexts, Steps, and Actions
Use start to set the first context.
```yaml
start: intake

contexts:
  intake:
    description: Collect caller details.
```
### Context Switching
Context switching is controlled by allowed context lists:

- `contexts.<name>.contexts`
- `steps[].contexts`
When switching, session values defined in `collect` are saved.
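A sketch of an allowed-context list, assuming `booking` and `farewell` contexts are defined elsewhere in the config:

```yaml
contexts:
  intake:
    description: Collect caller details.
    contexts:      # intake may only switch to these contexts
      - booking
      - farewell
```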
### Steps
Steps are ordered and run one at a time.
```yaml
contexts:
  intake:
    description: Ask one question at a time.
    steps:
      - initial: What is your first and last name?
        collect:
          first_name:
            description: Caller first name
          last_name:
            description: Caller last name
      - description: What is your date of birth?
        collect:
          dob:
            description: Date of birth in YYYY-MM-DD
```
Important:

- A string step (e.g. `- "ask name"`) is treated as the step `description`.
- It is not converted to `initial`.
`when` is supported on contexts and steps.
`goto` is supported on steps for context jumps:

```yaml
- goto: another_context
```
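Combining the two, a sketch of a conditional jump (the context name, collected value, and JEXL expression are illustrative):

```yaml
steps:
  - initial: Are you an existing patient?
    collect:
      is_existing:
        description: true if the caller is an existing patient
  - goto: existing_patient
    when: session.is_existing
```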
### Entry Actions
Contexts and steps can define `action` blocks that run immediately on entry (before the normal AI turn):
```yaml
action:
  webhook: submit_case
```
or
```yaml
action:
  tool: hangup
  final: We have what we need. Goodbye.
```
Supported action targets:

- `tool: hangup`
- `tool: finish`
- `webhook: <name>`
## Webhooks
Webhooks are function tools the AI can call.
Required webhook keys:

- `description`
- `url`
- `fields`
Example:
```yaml
webhooks:
  submit_case:
    description: Send case to CRM
    url: https://example.com/api/cases
    method: POST
    content_type: application/json
    expect:
      status: 200
      content_type: application/json
    headers:
      Authorization: Bearer ${{secret.crm_token}}
    fields:
      callerid:
        type: string
        value: ${{var.callerid}}
      dob:
        type: string
        description: Date of birth in YYYY-MM-DD
      age:
        compute: yearssince(session.dob)
```
Notes:

- Default method is `POST`.
- Default request `content_type` is `application/json`.
- Supported request bodies: JSON and `application/x-www-form-urlencoded`.
- For JSON payloads, `path` can map nested objects.
- `required: false` omits missing/empty values.
- `expect.status` must match exactly when provided.
- Without `expect`, success defaults to HTTP `200` or `202`.
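A sketch illustrating `path` nesting and `required: false` (the endpoint and field names are illustrative):

```yaml
webhooks:
  submit_case:
    description: Send case to CRM
    url: https://example.com/api/cases
    fields:
      first_name:
        type: string
        path: patient.name.first   # sent as { "patient": { "name": { "first": ... } } }
      nhs_number:
        type: string
        required: false            # omitted from the payload when missing/empty
```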
### Webhook Scope by Context/Step
Webhook exposure can be restricted by:

- context-level `webhooks`
- step-level `webhooks` (array or object allow/deny)
This lets you grant webhook access only where needed.
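A sketch of both forms, assuming a `submit_case` webhook is defined at root:

```yaml
contexts:
  intake:
    webhooks:
      - submit_case            # array form: allow only this webhook in intake
    steps:
      - description: Collect details first; no webhook access yet.
        webhooks:
          submit_case: false   # object form: deny at step scope
```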
### Mailto Webhooks
`url: mailto:someone@example.com` is supported.
For mailto webhooks, you can define `subject` and `body` as template strings or compute objects.
```yaml
webhooks:
  submitprescription:
    description: Email prescription request
    url: mailto:ops@example.com
    subject: New prescription request
    body: |
      Caller: ${{ session.first_name }} ${{ session.last_name }}
      Telephone: ${{ session.enquirer_telephone }}
      Recording: https://www.babblevoice.com/a/callexplorer?u=${{ var.uuid }}
      Transcript:
      ${{ fields.data }}
    fields:
      data:
        compute: >
          messages|chattext({ caller: session.first_name, assistant: "Bot" })
```
## RAG Search Tool
You can expose a search tool that queries MiniRAG.
```yaml
tools:
  search:
    - url: kb://prescriptions
      purpose: NHS prescription policy documents
```
The AI then receives a `search(url, query)` function constrained to the configured URLs.
## Execution Notes
- The root system prompt is always based on the root `description`.
- The active context `description` is appended when in a context.
- The active step `description` is appended when in steps.
- Initial prompt precedence is:
  - current step `initial`
  - current context `initial`
  - root `initial` (only when not in a context)
- On context switch or step completion, message history is sliced to the new base index to keep prompts focused.
- AI tool loops and entry-action loops are capped to avoid runaway behavior.
- Guard input retries are limited per prompt to avoid infinite reprompt loops.
## Practical Recommendation
Keep permissions tight:
- expose only required tools
- expose only required webhooks per context/step
- prefer deterministic `action` + `when` for critical workflow transitions